Conversation

@CISC (Collaborator) commented Dec 4, 2025

These are not stable and are causing severe CI congestion, so disable them until the underlying issue is resolved.
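For context, a minimal sketch of how a flaky job can be taken out of rotation with a job-level `if: false` guard in a GitHub Actions workflow. The file path, job name, and steps below are illustrative assumptions, not the actual diff merged in this PR:

```yaml
# Illustrative only: the actual change in this PR is not reproduced here.
# A job-level `if: false` skips the job on every run while keeping its definition in place.
jobs:
  ggml-ci-x64-amd-vulkan:          # hypothetical job name matching the ggml-ci-x64-amd-* pattern
    if: false                      # disable while the slowdown/hang issue is investigated
    runs-on: [self-hosted, Linux, X64]
    steps:
      - uses: actions/checkout@v4
      - name: Run ggml-ci
        run: bash ./ci/run.sh ./tmp/results ./tmp/mnt   # assumed ggml-ci entry point
```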

@CISC CISC requested a review from ggerganov December 4, 2025 09:03
@github-actions github-actions bot added the devops improvements to build systems and github actions label Dec 4, 2025
@CISC CISC merged commit 7dba049 into master Dec 4, 2025
66 of 67 checks passed
@CISC CISC deleted the cisc/ci-disable-ggml-ci-x64-amd branch December 4, 2025 10:25
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request Dec 4, 2025
* origin/master:
server: strip content-length header on proxy (ggml-org#17734)
server: move msg diffs tracking to HTTP thread (ggml-org#17740)
examples : add missing code block end marker [no ci] (ggml-org#17756)
common : skip model validation when --help is requested (ggml-org#17755)
ggml-cpu : remove asserts always evaluating to false (ggml-org#17728)
convert: use existing local chat_template if mistral-format model has one. (ggml-org#17749)
cmake : simplify build info detection using standard variables (ggml-org#17423)
ci : disable ggml-ci-x64-amd-* (ggml-org#17753)
common: use native MultiByteToWideChar (ggml-org#17738)
metal : use params per pipeline instance (ggml-org#17739)
llama : fix sanity checks during quantization (ggml-org#17721)
build : move _WIN32_WINNT definition to headers (ggml-org#17736)
build: enable parallel builds in msbuild using MTT (ggml-org#17708)
ggml-cpu: remove duplicate conditional check 'iid' (ggml-org#17650)
Add a couple of file types to the text section (ggml-org#17670)
convert : support latest mistral-common (fix conversion with --mistral-format) (ggml-org#17712)
Use OpenAI-compatible `/v1/models` endpoint by default (ggml-org#17689)
webui: Fix zero pasteLongTextToFileLen to disable conversion being overridden (ggml-org#17445)
0Marble pushed a commit to 0Marble/llama.cpp that referenced this pull request Dec 18, 2025
Anico2 added a commit to Anico2/llama.cpp that referenced this pull request Jan 15, 2026
blime4 pushed a commit to blime4/llama.cpp that referenced this pull request Feb 5, 2026
@jiachengjason (Contributor) commented

Hi @ggerganov and @CISC, what was the stability issue that you ran into? I am currently looking into bringing back the CI tests for AMD devices.

@CISC (Collaborator, Author) commented Feb 9, 2026

> Hi @ggerganov and @CISC, what was the stability issue that you ran into? I am currently looking into bringing back the CI tests for AMD devices.

Unsure; they would gradually get slower and slower until they froze completely.

@ggerganov (Member) commented

@jiachengjason Last year I rented a couple of AMD V710 GPUs from Azure cloud (more info in: #16249 and #17303). The problem was that the performance was unusually slow and the instances kept hanging.

If you can get these same workflows running stably on some AMD hardware and you can provision self-hosted runners, we can add them to the CI.
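For anyone picking this up, a hedged sketch of what the workflow side of such a self-hosted runner typically looks like. The `amd-gpu` label and the `GG_BUILD_ROCM` toggle are assumptions and would need to match the actual runner registration and the options supported by ci/run.sh:

```yaml
# Illustrative job targeting a self-hosted AMD runner; labels and env vars are assumptions.
jobs:
  ggml-ci-x64-amd-rocm:
    runs-on: [self-hosted, Linux, X64, amd-gpu]   # "amd-gpu" is a hypothetical custom label
    timeout-minutes: 120                          # guard against the hangs described above
    steps:
      - uses: actions/checkout@v4
      - name: Run ggml-ci
        # GG_BUILD_ROCM is an assumed backend toggle; check ci/run.sh for the real flags.
        run: GG_BUILD_ROCM=1 bash ./ci/run.sh ./tmp/results ./tmp/mnt
```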
